Sliced inverse regression

Sliced inverse regression (SIR) is a tool for dimension reduction in the field of multivariate statistics.
In statistics, regression analysis is a popular way of studying the relationship between a response variable ''y'' and its explanatory variable \underline{x}, which is a ''p''-dimensional vector. There are several approaches that fall under the term regression. For example, parametric methods include multiple linear regression, while non-parametric techniques include local smoothing.
With high-dimensional data (as ''p'' grows), the number of observations needed for local smoothing methods to work well escalates exponentially. Reducing the number of dimensions makes the operation computationally feasible. Dimension reduction aims to retain only the most important directions of the data. SIR uses the inverse regression curve, E(\underline{x}\,|\,y), to perform a weighted principal component analysis, with which one identifies the effective dimension reducing directions.
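To make this concrete, the following is a minimal sketch of the SIR estimator in Python/NumPy, following the standard recipe of Li (1991): standardize \,X, partition the observations into slices by the value of \,y, and take the leading principal directions of the weighted covariance of the slice means. The function name sir and the tuning parameters (number of slices, \,k) are illustrative choices, not prescribed by the article.
<syntaxhighlight lang="python">
import numpy as np

def sir(X, y, n_slices=10, k=2):
    """Minimal sliced inverse regression: estimate k EDR directions."""
    n, p = X.shape

    # Standardize X: z = Sigma^{-1/2} (x - mean).
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(cov)
    cov_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = (X - mean) @ cov_inv_sqrt

    # Slice the data by the order of y.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)

    # Weighted covariance of the slice means of Z:
    # a sample estimate of Cov(E[Z | y]).
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Weighted PCA: the top eigenvectors of M, mapped back
    # to the original scale, estimate the EDR directions.
    eigvals, eigvecs = np.linalg.eigh(M)  # ascending order
    top = eigvecs[:, ::-1][:, :k]
    beta_hat = cov_inv_sqrt @ top
    return beta_hat, eigvals[::-1]
</syntaxhighlight>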
This article first introduces the reader to dimension reduction and how it is performed using the model below; a short review of inverse regression follows, and these pieces are then brought together.
==Model==

Given a response variable \,Y and a (random) vector X \in \R^p of explanatory variables, SIR is based on the model
Y=f(\beta_1^\top X,\ldots,\beta_k^\top X,\varepsilon)\quad\quad\quad\quad\quad(1)
where \beta_1,\ldots,\beta_k are unknown projection vectors, \,k is an unknown number (the dimensionality of the space we are trying to reduce the data to) which, since we want to reduce the dimension, is smaller than \,p, \;f is an unknown function on \R^{k+1}, as it depends only on the \,k projections and the error term, and \varepsilon is the error with E(\varepsilon)=0 and finite variance \sigma^2. The model describes an ideal solution, where \,Y depends on X \in \R^p only through a \,k-dimensional subspace; that is, one can reduce the dimension of the explanatory variable from \,p to a smaller number \,k without losing any information.
An equivalent version of \,(1) is: the conditional distribution of \,Y given \,X depends on \,X only through the \,k-dimensional random vector (\beta_1^\top X,\ldots,\beta_k^\top X). This perfectly reduced vector is as informative as the original \,X in explaining \,Y.
The unknown \,\beta_i's are called the ''effective dimension reducing directions'' (EDR-directions). The space spanned by these vectors is called the ''effective dimension reducing space'' (EDR-space).
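As a usage illustration of the sketch above, one can simulate data from a model of the form \,(1) and check that the estimated directions span approximately the same space as the true \beta_i's. The response function below (a rational function of two projections, of the kind used in Li's original paper) and all constants are hypothetical choices for this demonstration.
<syntaxhighlight lang="python">
rng = np.random.default_rng(0)
n, p, k = 2000, 10, 2
X = rng.standard_normal((n, p))

# True EDR directions: the first two coordinate axes.
b1 = np.zeros(p); b1[0] = 1.0
b2 = np.zeros(p); b2[1] = 1.0

# A model of the form (1): y = f(b1'X, b2'X, eps).
y = (X @ b1) / (0.5 + (X @ b2 + 1.5) ** 2) \
    + 0.1 * rng.standard_normal(n)

beta_hat, eigvals = sir(X, y, n_slices=20, k=k)

# If the EDR space is recovered, projecting b1 and b2 onto
# span(beta_hat) should leave their norms close to 1.
Q, _ = np.linalg.qr(beta_hat)
print(np.linalg.norm(Q.T @ b1), np.linalg.norm(Q.T @ b2))
</syntaxhighlight>
Note that the response is deliberately asymmetric in each projection: the inverse regression curve E(\underline{x}\,|\,y) is degenerate along any direction in which the response is symmetric, a well-known limitation of SIR.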
